Most existing Spiking Neural Network (SNN) works state that SNNs may utilize the temporal information dynamics of spikes, yet an explicit analysis of these dynamics is still missing. In this paper, we ask several important questions toward a fundamental understanding of SNNs: What are the temporal information dynamics inside SNNs? How can we measure them? How do they affect the overall learning performance? To answer these questions, we empirically estimate the Fisher Information of the weights to measure how temporal information is distributed during training. Surprisingly, as training progresses, Fisher information starts to concentrate in the early timesteps; after training, information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration. Through extensive experiments on various configurations (architecture, dataset, optimization strategy, time constant, and number of timesteps), we observe that temporal information concentration is a common learning feature of SNNs. Furthermore, to reveal how temporal information concentration affects the performance of SNNs, we design a loss function that changes the trend of temporal information. We find that temporal information concentration is crucial to building a robust SNN but has little effect on classification accuracy. Finally, we propose an efficient iterative pruning method based on our observation of temporal information concentration. Code is available at https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Temporal-Information-Dynamics-in-Spiking-Neural-Networks.
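The per-timestep Fisher measurement described above can be illustrated with a minimal sketch (illustrative, not the authors' code): for a softmax readout over spikes at one timestep, the empirical Fisher trace is the mean squared gradient of the log-likelihood across samples. The toy spike trains below simply have a decaying firing rate, mimicking a trained SNN whose activity sits in early timesteps; all names and shapes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def empirical_fisher_trace(spikes, labels, V):
    """Trace of the empirical Fisher of readout weights V at one timestep:
    the mean squared gradient of the log-likelihood over samples."""
    total = 0.0
    for s, y in zip(spikes, labels):
        p = softmax(V.T @ s)           # class probabilities from spikes
        err = p.copy()
        err[y] -= 1.0                  # gradient of -log p_y w.r.t. logits
        g = np.outer(s, err)           # gradient w.r.t. the readout weights
        total += float((g ** 2).sum())
    return total / len(labels)

# toy spike trains whose firing rate decays over time
T, n_samples, n_neurons, n_classes = 4, 32, 10, 3
rates = [0.8, 0.4, 0.2, 0.1]
labels = rng.integers(0, n_classes, size=n_samples)
V = rng.normal(size=(n_neurons, n_classes))
traces = [
    empirical_fisher_trace(
        rng.binomial(1, rates[t], size=(n_samples, n_neurons)).astype(float),
        labels, V)
    for t in range(T)
]
print([round(tr, 3) for tr in traces])
```

With the decaying rates, the Fisher trace itself decays across timesteps, which is the kind of per-timestep profile the paper inspects to detect temporal information concentration.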
Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks in which binary spikes convey information across multiple timesteps. Pruning for SNNs is highly important as they are deployed on resource-constrained mobile/edge devices. Previous SNN pruning works focus on shallow SNNs (2~6 layers); however, deeper SNNs (>16 layers) proposed by state-of-the-art SNN works are difficult to handle with current pruning techniques. To scale pruning toward deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that perform comparably to the dense networks. Our study of LTH reveals that winning tickets consistently exist in deep SNNs across various datasets and architectures, providing up to 97% sparsity without huge performance degradation. However, the iterative search process of LTH incurs a huge training cost when combined with the multiple timesteps of SNNs. To alleviate this heavy search cost, we propose Early-Time (ET) tickets, which find the important weight connectivity from a small number of timesteps. The proposed ET tickets can be seamlessly combined with common techniques for finding winning tickets, such as Iterative Magnitude Pruning (IMP) and Early-Bird (EB) tickets. Our experimental results show that the proposed ET tickets reduce search time by up to 38% compared to IMP or EB methods.
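The Iterative Magnitude Pruning loop underlying LTH winning-ticket search can be sketched in a few lines (a minimal sketch under stated assumptions: the training step is replaced by a stand-in update, and the ET-ticket idea of searching over fewer timesteps is not shown). Each round removes the lowest-magnitude fraction of the surviving weights and rewinds the rest to their initialization.

```python
import numpy as np

rng = np.random.default_rng(1)

def prune_lowest(weights, mask, frac):
    """Zero out the smallest-magnitude fraction of the still-active weights."""
    active = np.abs(weights[mask])
    k = int(frac * active.size)
    if k == 0:
        return mask
    thresh = np.partition(active, k - 1)[k - 1]
    return mask & (np.abs(weights) > thresh)

# IMP loop: train -> prune 20% of remaining weights -> rewind to init
w_init = rng.normal(size=1000)
mask = np.ones_like(w_init, dtype=bool)
for _ in range(5):
    w = w_init * mask                              # rewind to the original init
    w += 0.01 * rng.normal(size=w.shape) * mask    # stand-in for a training run
    mask = prune_lowest(w, mask, 0.2)
sparsity = 1.0 - mask.mean()
print(f"final sparsity: {sparsity:.2%}")
```

Five rounds at a 20% pruning rate leave roughly 0.8^5 of the weights, i.e. about 67% sparsity; reaching the 97% sparsity reported above would simply take more rounds.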
Spiking Neural Networks (SNNs) have gained attention as energy-efficient alternatives to conventional Artificial Neural Networks (ANNs) owing to their inherently sparse activations. However, most prior SNN methods use ANN-like architectures (e.g., VGG-Net or ResNet), which could provide sub-optimal performance for the temporal sequence processing of binary information in SNNs. To address this, in this paper we introduce a novel Neural Architecture Search (NAS) approach for finding better SNN architectures. Inspired by recent NAS approaches that find the optimal architecture from activation patterns at initialization, we select the architecture that can represent diverse spike activation patterns across different data samples without training. Moreover, to further leverage the temporal information among the spikes, we search for feed-forward connections as well as backward connections (i.e., temporal feedback connections) between layers. Interestingly, SNASNet, found by our search algorithm, achieves higher performance with backward connections, demonstrating the importance of designing SNN architectures that suitably use temporal information. We conduct extensive experiments on three image recognition benchmarks and show that SNASNet achieves state-of-the-art performance with a significantly lower number of timesteps (5 timesteps). Code is available on GitHub.
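The idea of scoring untrained architectures by the diversity of their spike activation patterns can be sketched as follows (a toy, NASWOT-style score adapted to binary spikes; the paper's actual scoring function and search space are not reproduced here, and both "architectures" below are stand-in random responses). Candidates whose binary patterns differ more across samples get a higher log-determinant score.

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_pattern_score(patterns):
    """Log-determinant of the pairwise agreement kernel of binary spike
    patterns: more diverse patterns across samples -> higher score."""
    # kernel[i, j] = number of positions where samples i and j agree
    k = patterns @ patterns.T + (1 - patterns) @ (1 - patterns).T
    sign, logdet = np.linalg.slogdet(k)
    return logdet if sign > 0 else -np.inf

# compare two hypothetical candidates by their untrained spike responses
# to the same batch of 16 inputs (64 neurons each)
diverse = rng.integers(0, 2, size=(16, 64)).astype(float)
collapsed = np.tile(rng.integers(0, 2, size=(1, 64)), (16, 1)).astype(float)
collapsed[:, :2] = rng.integers(0, 2, size=(16, 2))  # near-identical patterns
s_diverse = spike_pattern_score(diverse)
s_collapsed = spike_pattern_score(collapsed)
print(s_diverse, s_collapsed)
```

The collapsed candidate, whose responses barely distinguish the inputs, produces a (near-)singular kernel and a degenerate score, so the search would prefer the diverse one.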
How can we bring both privacy and energy efficiency to a neural system? In this paper, we propose PrivateSNN, which aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in the dataset. Here, we tackle two types of leakage problems: 1) data leakage, caused when the network accesses real training data during the ANN-SNN conversion process, and 2) class leakage, caused when class-related features can be reconstructed from the network parameters. To address the data leakage issue, we generate synthetic images from the pre-trained ANN and convert the ANN to an SNN using the generated images. However, the converted SNNs remain vulnerable to class leakage, since their weight parameters have the same (or scaled) values as the ANN parameters. Therefore, we encrypt the SNN weights by training the SNNs with a temporal spike-based learning rule. Updating the weight parameters with temporal data makes the SNNs difficult to interpret in the spatial domain. We observe that the encrypted PrivateSNN eliminates the data and class leakage issues with a slight performance drop (less than ~2%) and a significant energy-efficiency gain (about 55x) compared to a standard ANN. We conduct extensive experiments on various datasets including CIFAR10, CIFAR100, and TinyImageNet, highlighting the importance of privacy-preserving SNN training.
The one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth achieves an optimal in-expectation risk bound in the standard PAC classification setup. In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high probability bound on the risk, and hence is also an optimal PAC algorithm. We refute this conjecture in the strongest sense: for any practically interesting Vapnik-Chervonenkis class, we provide an in-expectation optimal one-inclusion graph algorithm whose high probability risk bound cannot go beyond that implied by Markov's inequality. Our construction of these poorly performing one-inclusion graph algorithms uses Varshamov-Tenengolts error correcting codes. Our negative result has several implications. First, it shows that the same poor high-probability performance is inherited by several recent prediction strategies based on generalizations of the one-inclusion graph algorithm. Second, our analysis shows yet another statistical problem that enjoys an estimator that is provably optimal in expectation via a leave-one-out argument, but fails in the high-probability regime. This discrepancy occurs despite the boundedness of the binary loss for which arguments based on concentration inequalities often provide sharp high probability risk bounds.
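The Varshamov-Tenengolts codes used in the construction above are concrete enough to sketch (illustrative only; how the paper embeds them into the one-inclusion graph algorithm is not reproduced here). VT_a(n) is the set of binary strings of length n whose weighted sum is congruent to a mod (n+1), and its defining property is that any single deletion can be corrected, i.e., the single-deletion balls of distinct codewords are disjoint.

```python
import itertools

def vt_code(n, a=0):
    """Varshamov-Tenengolts code VT_a(n): binary strings x of length n with
    sum_{i=1..n} i * x_i == a (mod n+1)."""
    return [x for x in itertools.product((0, 1), repeat=n)
            if sum((i + 1) * b for i, b in enumerate(x)) % (n + 1) == a]

def deletions(x):
    """All strings obtainable from x by deleting exactly one symbol."""
    return {x[:i] + x[i + 1:] for i in range(len(x))}

code = vt_code(6)
# single-deletion correction: no two codewords share a deletion result
for u, v in itertools.combinations(code, 2):
    assert deletions(u).isdisjoint(deletions(v))
print(len(code))  # VT_0(6) has 10 codewords
```

Since |VT_0(n)| is at least 2^n / (n+1), these codes are also near-optimally large, which is what makes them useful building blocks for adversarial constructions like the one above.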
Estimating heart rate from video enables non-contact health monitoring, with applications in patient care, human interaction, and sports. Existing work can robustly measure heart rate under some degree of motion via face tracking. However, this is not always possible in unconstrained settings, as the face may be occluded or even outside the camera's view. Here we present IntensePhysio: a challenging video heart rate estimation dataset with realistic face occlusions, severe subject motion, and ample heart rate variation. To ensure heart rate variation in a realistic setting, we record each subject for roughly 1-2 hours. Each subject exercises (at moderate to high intensity) on a cycling ergometer with an attached video camera and is given no instructions regarding positioning or movement. We have 11 subjects and approximately 20 total hours of video. We show that existing remote photoplethysmography methods struggle to estimate heart rate in this setting. In addition, we present IBIS-CNN, a new baseline using spatio-temporal superpixels, which improves on existing models by eliminating the need for a visible face and face tracking. We will make the code and data publicly available soon.
We present Covy, a robotic platform that promotes social distancing during pandemics like COVID-19. Covy features a novel compound vision system that enables it to detect social distancing breaches up to 16m away. Covy navigates its surroundings autonomously using a hybrid navigation stack that combines Deep Reinforcement Learning (DRL) with probabilistic localization methods. We built the complete system and evaluated Covy's performance through extensive experiments both in simulation and in realistic environments. Among other findings, our results show that the hybrid navigation stack is more robust than a purely DRL-based solution.
In the classical setting of self-selection, the goal is to learn $k$ models, simultaneously from observations $(x^{(i)}, y^{(i)})$ where $y^{(i)}$ is the output of one of $k$ underlying models on input $x^{(i)}$. In contrast to mixture models, where we observe the output of a randomly selected model, here the observed model depends on the outputs themselves, and is determined by some known selection criterion. For example, we might observe the highest output, the smallest output, or the median output of the $k$ models. In known-index self-selection, the identity of the observed model output is observable; in unknown-index self-selection, it is not. Self-selection has a long history in Econometrics and applications in various theoretical and applied fields, including treatment effect estimation, imitation learning, learning from strategically reported data, and learning from markets at disequilibrium. In this work, we present the first computationally and statistically efficient estimation algorithms for the most standard setting of this problem where the models are linear. In the known-index case, we require poly$(1/\varepsilon, k, d)$ sample and time complexity to estimate all model parameters to accuracy $\varepsilon$ in $d$ dimensions, and can accommodate quite general selection criteria. In the more challenging unknown-index case, even the identifiability of the linear models (from infinitely many samples) was not known. We show three results in this case for the commonly studied $\max$ self-selection criterion: (1) we show that the linear models are indeed identifiable, (2) for general $k$ we provide an algorithm with poly$(d) \exp(\text{poly}(k))$ sample and time complexity to estimate the regression parameters up to error $1/\text{poly}(k)$, and (3) for $k = 2$ we provide an algorithm for any error $\varepsilon$ and poly$(d, 1/\varepsilon)$ sample and time complexity.
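The max self-selection observation model above can be made concrete with a short sketch (all names and parameters are illustrative): k linear models are evaluated on each input, only the largest output is observed, and in the known-index case the identity of the winning model is observed too. The per-model least-squares fit at the end is a naive known-index baseline, not the paper's estimator: the selection truncates the noise, so this fit is biased, which is exactly the difficulty the paper's efficient estimators address.

```python
import numpy as np

rng = np.random.default_rng(3)

d, k, n = 5, 3, 1000
W = rng.normal(size=(k, d))            # k unknown linear models
X = rng.normal(size=(n, d))
noise = 0.1 * rng.normal(size=(n, k))
outputs = X @ W.T + noise              # all k model outputs per sample
idx = outputs.argmax(axis=1)           # max self-selection criterion
y = outputs[np.arange(n), idx]         # only the winning output is observed

# naive known-index baseline: fit each model on the samples it "won"
# (biased, because the noise is truncated by the selection event)
W_hat = np.stack([
    np.linalg.lstsq(X[idx == j], y[idx == j], rcond=None)[0]
    for j in range(k)
])
print(np.abs(W_hat - W).max())
```

In the unknown-index variant, `idx` would be hidden and only `(X, y)` observed, which is why even identifiability there is nontrivial.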
Graph neural networks that leverage coordinates via directional message passing have recently set the state of the art on multiple molecular property prediction tasks. However, they rely on atom position information, which is often unavailable, and obtaining it is usually prohibitively expensive or even impossible. In this paper, we propose synthetic coordinates that enable the use of advanced GNNs without requiring the true molecular configuration. We propose two distances as synthetic coordinates: distance bounds that specify the rough range of molecular configurations, and graph-based distances using a symmetric variant of personalized PageRank. To leverage both distance and angular information, we propose a method for transforming normal graph neural networks into directional MPNNs. We show that with this transformation we can reduce the error of a normal graph neural network by 55% on the ZINC benchmark. We furthermore set the state of the art on ZINC and coordinate-free QM9 by incorporating synthetic coordinates into the SMP and DimeNet++ models. Our implementation is available online.
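The graph-based synthetic coordinate can be sketched on a toy graph (illustrative only: the paper's exact symmetric PPR variant and distance transform may differ from the simple averaging and negative-log used here). Personalized PageRank similarities between nodes are computed densely, symmetrized, and turned into a distance, so that nearby nodes in the graph receive small synthetic distances.

```python
import numpy as np

def ppr_matrix(adj, alpha=0.15):
    """Dense personalized PageRank: pi = alpha * (I - (1-alpha) * A D^-1)^-1,
    where column j holds the PPR vector seeded at node j."""
    deg = adj.sum(axis=0)
    trans = adj / deg                   # column-stochastic transition matrix
    n = adj.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * trans)

# small path graph 0-1-2-3
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

ppr = ppr_matrix(adj)
sym = 0.5 * (ppr + ppr.T)                    # simple symmetrization
dist = -np.log(np.clip(sym, 1e-12, None))    # similarity -> "distance"
np.fill_diagonal(dist, 0.0)
print(dist[0, 1] < dist[0, 3])               # nearer nodes, smaller distance
```

These pairwise distances (together with the angles they induce) are what a directional MPNN can then consume in place of true atomic coordinates.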